Kaggle Titanic: predict whether each passenger survived the sinking from features such as sex, age, and ticket class.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# Load the training and test data
train_data = pd.read_csv('train.csv')
test_data = pd.read_csv('test.csv')
# Extract features and target. The raw CSV contains string columns and missing
# values, so keep a few numeric/encodable features and do minimal cleaning
features = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare']
X = train_data[features].copy()
X['Sex'] = X['Sex'].map({'male': 0, 'female': 1})  # encode Sex as 0/1
X['Age'] = X['Age'].fillna(X['Age'].median())      # fill missing ages
y = train_data['Survived']
# Split into training and validation sets
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
# Train several base models
rf_model = RandomForestClassifier(n_estimators=100, random_state=42)
gb_model = GradientBoostingClassifier(n_estimators=100, random_state=42)
lr_model = LogisticRegression(max_iter=1000, random_state=42)
rf_model.fit(X_train, y_train)
gb_model.fit(X_train, y_train)
lr_model.fit(X_train, y_train)
# Get each base model's predictions on the validation set
rf_preds = rf_model.predict(X_val)
gb_preds = gb_model.predict(X_val)
lr_preds = lr_model.predict(X_val)
# Combine the base models by simple majority voting:
# predict survival when at least 2 of the 3 models vote 1
ensemble_preds = ((rf_preds + gb_preds + lr_preds) >= 2).astype(int)
# Evaluate accuracy on the validation set
ensemble_accuracy = accuracy_score(y_val, ensemble_preds)
print(f'Ensemble Accuracy on Validation Set: {ensemble_accuracy}')
We load the training and test data, train three base models (a random forest, gradient-boosted trees, and logistic regression), combine their predictions by simple majority voting, and finally use accuracy to evaluate the ensemble on the held-out validation set.
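As a side note, scikit-learn's VotingClassifier wraps this manual hard-voting step in a single estimator. Below is a minimal sketch of the equivalent ensemble; it assumes the same base models, preprocessing, and train/validation split defined above, so the numbers it prints may differ slightly from the manual version.

from sklearn.ensemble import VotingClassifier

# Hard voting: each estimator casts one vote per sample and the
# majority class becomes the ensemble prediction
voting_model = VotingClassifier(
    estimators=[('rf', rf_model), ('gb', gb_model), ('lr', lr_model)],
    voting='hard'
)
voting_model.fit(X_train, y_train)
voting_preds = voting_model.predict(X_val)
print(f'VotingClassifier Accuracy on Validation Set: {accuracy_score(y_val, voting_preds)}')

Using the built-in estimator keeps the ensemble compatible with the rest of the scikit-learn API, so it can be dropped into cross-validation or a grid search just like any single model.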